Method for multi-channel fusion and presentation of virtual learning environment oriented to field practice teaching
Abstract:
The present invention pertains to the field of teaching applications of virtual reality (VR) technology, and provides a method for multi-channel fusion and presentation of a virtual learning environment oriented to field practice teaching. The method includes three steps: content generation, fusion of visual and auditory channels, and multi-channel interaction design. In the present invention, a set of methods for data acquisition, knowledge organization, and scene switching is established according to the characteristics of teaching content in a virtual learning environment; synchronous updating of the visual and auditory channels is implemented in a spatial rendering mode; and the input and output priorities of the various interactive channels are evaluated, completing multi-sensory cooperative interaction of a student in the virtual learning environment. By adding auditory cues, adding a mode of determining multi-channel user interaction, and implementing fusion and presentation of the virtual learning environment, the present invention can enhance the realism of the learning environment and improve the immersive experience of a participant.

Publication number: NL2026359A
Application number: NL2026359
Filing date: 2020-08-27
Publication date: 2021-08-17
Inventors: Yang Zongkai; Wu Ke; Zhong Zheng; Wu Di
Applicant: Central China Normal University
Description:
METHOD FOR MULTI-CHANNEL FUSION AND PRESENTATION OF VIRTUAL LEARNING ENVIRONMENT ORIENTED TO FIELD PRACTICE TEACHING

TECHNICAL FIELD

The present invention pertains to the field of teaching applications of virtual reality (VR) technology, and more specifically, relates to a method for multi-channel fusion and presentation of a virtual learning environment oriented to field practice teaching.

BACKGROUND

Field practice is an important practical step in training specialists in subjects such as geography, geology, and biology, and is an important educational activity for training students to link theory with practice, master the basic knowledge and skills of these subjects, and improve their overall quality and their practical and innovative abilities. However, field practice currently faces many problems. There is a shortage of teachers with a solid foundation in taxonomy and rich experience in field practice; many students participate in practice within a short time, making it difficult for a teacher to provide "one-to-one" guidance; because practice content and practice modes are limited, practice largely stops at recognizing species and acquiring and preparing specimens, and the sharing and interactivity of practice achievements are poor; specimen acquisition conflicts with environmental protection; owing to changes of seasons, weather, and biotopes, many things that should be completed in field practice can hardly be completed; and field practice involves natural disasters such as torrential floods, landslides, and debris flows, as well as safety risks such as insect stings, snake bites, falls, and sunstroke in summer, all of which affect the students' field practice outcomes.

Building a field practice environment with VR technology can break the limits of time and space. A student can immersively carry out field practice indoors, and learning can be repeated without limit until an ideal effect is achieved. This is a beneficial supplement to field practice teaching that can not only effectively solve many difficulties encountered in field practice, but also greatly enhance students' interest in learning. As commercial 5G networks are rapidly popularized, performance bottlenecks of VR content such as ultra-high resolution, full view, and low latency will be largely resolved, and a virtual learning environment oriented to field practice teaching will have broad application prospects.

Currently, although a realistic field practice virtual learning environment can be quickly built in a panoramic mode, the actual requirements of field practice teaching still cannot be fully satisfied. Using field practice in biology as an example, pictures of communication, social, and reproductive behavior between animals may be captured in a VR panoramic video, but it is difficult to convey the biological behavior implied therein to a student; for instance, an animal's vocal mechanism, its sound signal characteristics, and its sound wave reception, processing, and recognition cannot easily be conveyed in a picture. Through multi-camera sound synchronization processing, the student can feel the differences between sounds from all directions. Although a natural, realistic effect can already be presented by sound source changes in panoramic sound simulation, this applies only when the cameras are relatively fixed, and the omnidirectional auditory presentation requirements of field practice teaching can hardly be satisfied.
SUMMARY

In view of the foregoing disadvantages or improvement requirements of the prior art, the present invention provides a method for multi-channel fusion and presentation of a virtual learning environment oriented to field practice teaching, and, centering on the requirements of virtual simulation teaching in field practice teaching, provides a solution for content generation, audiovisual fusion, and cooperative interaction. A set of methods for data acquisition, knowledge organization, and scene switching is established according to the characteristics of teaching content in a virtual learning environment; synchronous updating of the visual and auditory channels is implemented in a spatial rendering mode; and the input and output priorities of the various interactive channels are evaluated, completing multi-sensory cooperative interaction of a student in the virtual learning environment.

The objectives of the present invention are achieved by the following technical measures. A method for multi-channel fusion and presentation of a virtual learning environment oriented to field practice teaching includes three steps: content generation, fusion of visual and auditory channels, and multi-channel interaction design.

A) Content generation: To satisfy the requirements of field practice teaching, complete VR panoramic content acquisition in a practice area by combining aerial photographing and terrestrial acquisition, establish an organizational mode for knowledge elements in different layers and areas of a virtual learning environment, and complete optimization of the scene-to-scene jumping effect.

A-a) Data acquisition: To reproduce a field practice teaching process realistically, acquire teaching information in a field practice area from two layers, terrestrial observation points and aerial photographing areas, and complete digitization in a VR panoramic video mode.

A-a-i) Acquisition of terrestrial observation point information: For terrestrial observation practice content, use a high-definition motion camera group to capture dynamic images from all angles, implement high-density multi-angle acquisition of real field information, and obtain complete material information of a field practice scene.

A-a-ii) Acquisition of aerial photographing information by an unmanned aerial vehicle: For practice content such as observation of an aerial view and of the vertical distribution of biotopes in a macro-scale field practice area, photograph the biotopes of aerial photographing areas in different ecotopes using the unmanned aerial vehicle, to obtain material information covering a full field of vision.

A-a-iii) Mapping between the two layers: An acquisition point of aerial photographing by the unmanned aerial vehicle needs to correspond to the content of terrestrial observation points; that is, each time panoramic aerial photographing content of an area is acquired, information data of a plurality of terrestrial observation points needs to be acquired correspondingly (an illustrative sketch of this mapping is given below).

A-b) Data organization: Establish an aggregation mode between knowledge elements in different layers and different areas according to the progressive relationships and associations between teaching content, and fuse subject knowledge content with a practice route according to the field practice routine.
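To make the layered organization concrete, the following is a minimal sketch of the aerial-to-terrestrial mapping of step A-a-iii, in which one aerial acquisition area aggregates several terrestrial observation points. The patent prescribes no implementation; all names and fields here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class GroundPoint:
    """A terrestrial VR panoramic acquisition point (hypothetical structure)."""
    point_id: str
    lat: float
    lon: float
    panorama_file: str          # VR panoramic video captured by the camera group

@dataclass
class AerialArea:
    """One UAV aerial photographing area, associated with many ground points (A-a-iii)."""
    area_id: str
    panorama_file: str          # aerial VR panorama of the whole area
    ground_points: list[GroundPoint] = field(default_factory=list)

    def add_ground_point(self, p: GroundPoint) -> None:
        # Each aerial acquisition is paired with several terrestrial acquisitions.
        self.ground_points.append(p)

# Example: one aerial area mapped to two terrestrial observation points.
area = AerialArea("A01", "aerial_A01.mp4")
area.add_ground_point(GroundPoint("G01", 30.52, 114.36, "ground_G01.mp4"))
area.add_ground_point(GroundPoint("G02", 30.53, 114.37, "ground_G02.mp4"))
```

A one-to-many container of this kind also lends itself to the pyramid-style vertical association described next, since each aerial scene acts as the parent node of its ground points.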
A-b-i) Acquisition point annotation: Use an electronic map as the basic geographic data platform, use different symbols to represent the VR panoramic acquisition points of terrestrial observation and of aerial photographing by the unmanned aerial vehicle, and annotate the VR panoramic acquisition points on the electronic map according to their spatial positions.

A-b-ii) Vertical association: Establish an association relationship between an aerial photographing scene and a terrestrial acquisition point in the virtual learning environment by using a pyramid hierarchical structure model, and implement fast switching from a macro scene to a micro object.

A-b-iii) Horizontal association: In a sandbox model of the terrain and landform of a practice area, combine ecotope aerial photographing points, terrestrial observation points, and subject knowledge points according to the moving route of field practice, to form different survey routes.

A-c) Scene transition: In field practice teaching, associations exist between internship sites and content. To reduce a student's dizziness during VR scene switching, design a solution for optimizing the scene-to-scene jumping and switching effect according to the mutual relationships between internship sites and content.

A-c-i) Guiding element design: The interactive interface of the virtual learning environment changes from a two-dimensional plane to a three-dimensional sphere, which exceeds the limits of a conventional display screen. Therefore, media navigation information such as text, symbols, and voice is designed to guide the student to a broader field of vision and direct the student's attention to important learning content.

A-c-ii) Scene switching: According to the geographically relative positions of two scenes, add an indicative icon of the target switching point to the previous scene as an entry for jumping to the next scene, where the pattern of the icon may be designed according to the scene background.

A-c-iii) Transition optimization: For great differences in picture color, brightness, or content during scene switching, use similar fusion, gradient fusion, and highlighting modes to resolve visual mutation.

B) Fusion of visual and auditory channels: Represent the attenuation of a learning object and a background sound source in the virtual learning environment by using a linear volume-distance attenuation method, and implement a spatial rendering mode for the sounds of different objects in a VR scene; and, with reference to a head tracking technology, implement synchronous updating of the panoramic video and sound while the student's head moves.

B-a) Spatial rendering of an audiovisual combination: Represent the attenuation of an object and other background sound sources in the virtual learning environment by using the linear volume-distance attenuation method in combination with a binaural positioning audio technology based on a Doppler effect model, and implement a spatial rendering mode applicable to the sounds of different objects and different background sound effects in the VR scene.

B-a-i) Simulation of multiple sound sources: Simulate static and dynamic point sound sources for corresponding objects in the virtual learning environment according to dynamically changing parameters such as position, direction, attenuation, and Doppler effect, and a background sound effect without parameters such as position and speed.
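As an illustration of the point sound sources of B-a-i, and of the linear and logarithmic attenuation modes named in B-a-iii below, here is a minimal Python sketch of a positioned source whose volume attenuates with distance and is panned between the left and right channels. The patent specifies no code; class, parameter, and method names are illustrative, and the linear branch matches formula 1 given later in the embodiments.

```python
import math

class PointSoundSource:
    """A positioned sound source with volume-distance attenuation (illustrative)."""

    def __init__(self, x: float, y: float, v_max: float = 1.0,
                 r_max: float = 50.0, mode: str = "linear"):
        self.x, self.y = x, y        # position in the scene (meters)
        self.v_max = v_max           # maximum volume at the source
        self.r_max = r_max           # maximum audible distance
        self.mode = mode             # "linear" or "logarithmic" attenuation

    def volume_at(self, lx: float, ly: float) -> float:
        """Volume heard by a listener at (lx, ly)."""
        r = math.hypot(self.x - lx, self.y - ly)
        if r >= self.r_max:
            return 0.0               # beyond the maximum audible distance
        if self.mode == "linear":    # linear mode, e.g. for background sources
            return self.v_max * (1.0 - r / self.r_max)
        # logarithmic mode, e.g. for a directional point source
        return self.v_max / (1.0 + math.log1p(r))

    def stereo_gains(self, lx: float, ly: float, yaw: float):
        """Split the volume into left/right gains from the bearing to the source."""
        v = self.volume_at(lx, ly)
        bearing = math.atan2(self.y - ly, self.x - lx) - yaw
        pan = math.sin(bearing)      # -1 = fully left, +1 = fully right
        return v * (1.0 - pan) / 2.0, v * (1.0 + pan) / 2.0

bird = PointSoundSource(x=10.0, y=0.0)
left, right = bird.stereo_gains(lx=0.0, ly=0.0, yaw=0.0)  # source dead ahead: equal gains
```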
B-a-ii) Mixing of the multiple sound sources: To simulate the vocal scenes of objects (such as animals or plants) in a real field environment, mutually fuse the spectrums of the sounds of different objects, and generate a multi-track mix.

B-a-iii) Sound attenuation effect representation: Use a combination of a logarithmic attenuation mode and a linear attenuation mode to reproduce the impact of factors such as distance and direction on the sound attenuation effect in the real field environment; for example, use the logarithmic attenuation mode for a directional point sound source, and use the linear attenuation mode for the background sound source.

B-a-iiii) Binaural positioning: Based on attributes such as the motion, direction, position, and structure of the sound source, as reflected by sound loudness and spectrum characteristics, determine the position of a sound source in the virtual learning environment relative to the position of the student according to sound propagation principles.

B-a-iiiii) Spatial rendering: Considering the Doppler effect, render the left and right sound channels with different strengths according to the position of the student and the direction, distance, and motion changes of the sound source in the virtual learning environment.

B-b) Synchronous audio and video updating: With reference to the head tracking technology, support synchronous updating of the video picture and sound while the student's head moves in the virtual learning environment, and implement fusion and presentation of the visual and auditory channels.

B-b-i) Head and ear synchronization: Track the position and posture of the student's head in the virtual learning environment in real time according to the refresh frequency of the VR picture, redetermine the distance and direction of the sound source relative to the student, and implement synchronous rendering of the picture observed by the student and the sound heard by the student.

B-b-ii) Audiovisual fusion: Present a content scene in the virtual learning environment according to the teaching requirements, position the angle of view on the corresponding content through the student's head turning, and render the volume of different sound sources according to the distance between the student and the sound source of the content.

B-b-iii) Interference cancellation among the multiple sound sources: For the multiple sound sources in the virtual learning environment, use a sound source attenuation function and simulate a sound reverberation range, thereby reducing the interference factors among the multiple sound sources.

C) Multi-channel interaction design: To meet the requirement of multi-sensory cooperative interaction of the student in the virtual learning environment, screen, determine, decide, and fuse multi-sensory interactive behavior according to the corresponding parameters of interactive objects, in the order interactive task → interactive behavior → interactive experience.

C-a) Interactive task design: Achieve orderly participation of interactive behavior by properly designing interactive tasks, and form a good interactive experience, thereby providing a good mechanism for multi-channel interaction.

C-a-i) Interactive task decomposition: During task design, a task needs to be decomposed into a temporal task and a spatial task according to its temporal and spatial attributes, and the interactive mode, objective, action, function, and specific process of the task are designed according to these attributes.
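The temporal/spatial decomposition of C-a-i can be pictured as a small data structure. The sketch below is hypothetical (the patent defines no schema); the example values come from the bee-pollination task described later in the embodiments.

```python
from dataclasses import dataclass
from enum import Enum

class TaskAttribute(Enum):
    SPATIAL = "spatial"     # tied to coherent visual feedback, executed first
    TEMPORAL = "temporal"   # tied to auditory feedback over a longer time unit

@dataclass
class InteractiveTask:
    """Decomposition of an interactive task per step C-a-i (illustrative fields)."""
    number: str
    attribute: TaskAttribute
    objective: str
    action: str             # input action
    result: str             # output result

# The bee-pollination example from the embodiments, expressed in this structure.
pollination = InteractiveTask(
    number="01",
    attribute=TaskAttribute.SPATIAL,
    objective="complete pollination",
    action="a bee searches for pollen",
    result="contact with pollen",
)
buzz = InteractiveTask(
    number="01",                      # temporal task sharing the same task number
    attribute=TaskAttribute.TEMPORAL,
    objective="accurate sound feedback",
    action="a bee searches for pollen",
    result="buzz",
)
```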
C-a-ii) Spatial task design: In comparison with other conventional multimedia learning resources, visual enhancement is an advantage of the virtual learning environment. In the process of designing a spatial interactive task, the coherence of visual feedback should always be ensured, and the spatial task should be executed preferentially during execution.

C-a-iii) Temporal task design: Because this type of task has a long time unit, focus on the design of auditory channel information, the content of which includes background music, feedback sound effects, and the like, and mainly consider the sound information content and its accuracy in the output step.

C-b) Task decision: After multi-channel information is input, first determine the cooperative relationships therein and complete fusion of the input multi-channel information; then determine the weight and reliability of each piece of output information, accurately convey the feedback information to the student's sensory organs, and complete multi-channel fusion.

C-b-i) Input information synthesis: According to the input information of channels such as the visual, auditory, and tactile channels, determine the cooperative relationships between interactive actions during task execution, and complete the synthesis of the input information of each channel.

C-b-ii) Multi-channel integration: Decide the weight of the input information of each channel to ensure that the output information is accurately conveyed to the student in the virtual learning environment, which forms a condition for multi-channel integration.

C-b-iii) Multi-channel fusion: By properly allocating the output information of each channel, accurately convey the feedback information to the student's sensory organs and complete multi-channel fusion, so that the student obtains a good interactive experience.

The present invention provides a method for multi-channel fusion and presentation of a virtual learning environment oriented to field practice teaching, and, centering on the requirements of virtual simulation teaching in field practice teaching, provides a solution for content generation, audiovisual fusion, and cooperative interaction. A set of methods for data acquisition, knowledge organization, and scene switching is established according to the characteristics of teaching content in a virtual learning environment; synchronous updating of the visual and auditory channels is implemented in a spatial rendering mode; and the input and output priorities of the various interactive channels are evaluated, completing multi-sensory cooperative interaction of a student in the virtual learning environment. By adding auditory cues, adding a mode of determining multi-channel user interaction, and implementing fusion and presentation of the virtual learning environment, the present invention can enhance the realism of the learning environment and improve the immersive experience of a participant.

BRIEF DESCRIPTION OF DRAWINGS

FIG. 1 is a flowchart of a method for multi-channel fusion and presentation of a virtual learning environment oriented to field practice teaching according to an embodiment of the present invention;

FIG. 2 is a schematic diagram of the correspondence between an aerial photographing area of an unmanned aerial vehicle and a terrestrial observation point according to an embodiment of the present invention;

FIG. 3 is a schematic flowchart of designing a scene hotspot according to an embodiment of the present invention;
FIG. 4 is a schematic flowchart of designing scene switching according to an embodiment of the present invention;

FIG. 5 is a schematic flowchart of audio and video synchronization in a virtual learning environment according to an embodiment of the present invention;

FIG. 6 is a schematic flowchart of stereo audio processing according to an embodiment of the present invention;

FIG. 7 is a schematic diagram of a volume-distance attenuation mode of a sound source according to an embodiment of the present invention;

FIG. 8 is a schematic diagram of the division of the sound field reverberation effect of a point sound source according to an embodiment of the present invention;

FIG. 9 is a schematic diagram of a binaural positioning model according to an embodiment of the present invention;

FIG. 10 is a schematic diagram of the spatial relationship between a student and a sound source according to an embodiment of the present invention;

FIG. 11 is a schematic diagram of a layout of learning content according to an embodiment of the present invention; and

FIG. 12 is a diagram of task state transition according to an embodiment of the present invention.

DESCRIPTION OF EMBODIMENTS

To make the objectives, technical solutions, and advantages of the present invention more comprehensible, the following describes the present invention in detail with reference to the accompanying drawings.

As shown in FIG. 1, an embodiment of the present invention provides a method for multi-channel fusion and presentation of a virtual learning environment oriented to field practice teaching, where the method includes the following steps:

A) Content generation: Because content generation relates to the creation of field practice teaching knowledge in a virtual learning environment, complete VR panoramic content acquisition in a practice area by combining aerial photographing and terrestrial acquisition, establish an organization mode for knowledge elements in different layers and areas of the virtual learning environment, and implement optimization of the scene-to-scene jumping effect. Specifically, the following steps are included:

A-a) Data acquisition: To reproduce a field practice teaching process realistically, as shown in FIG. 2, acquire VR panoramic videos according to the teaching requirements and according to the different seasons of spring, summer, autumn, and winter; acquire teaching information in a field practice area from two layers, terrestrial observation points and aerial photographing areas; and complete digitization in a VR panoramic video mode.

A-a-i) Acquisition of terrestrial observation point information: For terrestrial observation practice content, use a high-definition motion camera group to capture dynamic images from all angles, implement high-density multi-angle acquisition of real field information, and obtain the related complete panoramic teaching information of a field practice scene.

A-a-ii) Acquisition of aerial photographing information by an unmanned aerial vehicle: For practice content such as observation of an aerial view and of the vertical distribution of biotopes in a macro-scale field practice area, take 360° photos of the biotopes of aerial photographing areas at fixed hovering points (120-500 meters) in the air in different ecotopes using the unmanned aerial vehicle, to obtain material information covering a full field of vision of the practice area.
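A possible way to catalog the panoramic material acquired in steps A-a-i and A-a-ii (season, acquisition layer, and UAV hovering altitude) is sketched below; the field names are hypothetical, and the 120-500 m check simply mirrors the hovering range stated above. The association of these records with terrestrial points follows in step A-a-iii.

```python
from dataclasses import dataclass

@dataclass
class PanoramaRecord:
    """One acquired VR panorama (illustrative catalog entry)."""
    record_id: str
    season: str          # "spring" | "summer" | "autumn" | "winter"
    layer: str           # "ground" (camera group) or "aerial" (UAV)
    altitude_m: float    # 0 for ground; UAV hovering altitude otherwise
    file: str

    def __post_init__(self) -> None:
        if self.season not in ("spring", "summer", "autumn", "winter"):
            raise ValueError("acquisition is organized by the four seasons")
        if self.layer == "aerial" and not (120.0 <= self.altitude_m <= 500.0):
            # Mirrors the fixed hovering points (120-500 meters) named in A-a-ii.
            raise ValueError("UAV hovering altitude outside the stated range")

rec = PanoramaRecord("A01-summer", "summer", "aerial", 300.0, "aerial_A01_summer.mp4")
```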
A-a-iii) Mapping between the two layers: An acquisition point of aerial photographing by the unmanned aerial vehicle needs to be associated with the content of terrestrial observation points; that is, an acquisition area of panoramic aerial photographing is associated with the panoramic material information of the plurality of terrestrial observation points acquired in that area.

A-b) Data organization: Establish an aggregation mode between knowledge elements in different layers and different areas according to the progressive relationships and associations between teaching content, and fuse subject knowledge content with a practice route according to the field practice routine.

A-b-i) Acquisition point annotation: Because there are many VR panoramic acquisition points, an electronic map may be used as the basic geographic data platform, hotspot and helicopter symbols are used to represent the VR panoramic acquisition points of terrestrial observation and of aerial photographing by the unmanned aerial vehicle, and the VR panoramic acquisition points are annotated at the corresponding spatial positions on the map.

A-b-ii) Vertical association: Represent the association relationship between an aerial photographing acquisition point and a terrestrial acquisition point in the virtual learning environment by using a pyramid hierarchical structure model, and implement fast switching from a macro scene to a micro object.

A-b-iii) Horizontal association: In a sandbox model of the terrain and landform of a practice area, associate content such as ecotope aerial photographing points, terrestrial observation points, and subject knowledge points according to the internal logic of the field practice knowledge, to form different survey routes.

A-c) Scene transition: In field practice teaching, associations exist between learning sites and content. By fully using the associations between different VR scenes, design a solution for optimizing the scene-to-scene jumping and switching effect, to reduce a student's dizziness during jumping.

A-c-i) Guiding element design: Media navigation information such as text, symbols, and voice may instruct the student to pay attention to important learning content. FIG. 3 presents the process of designing and adding a hotspot (a planar hotspot and a transparent hot area) in the virtual learning environment.

A-c-ii) Scene switching: To implement jumping between scene 1 and scene 2 in FIG. 4, first obtain a scene switching point in scene 1, and then add an indicative icon of scene 2 at that position as an entry for jumping to scene 2, where the pattern of the icon should be designed according to the scene background, and the direction and name of scene 2 are marked.

A-c-iii) Transition optimization: FIG. 4 presents the different processing modes used for differences between scenes during scene switching, that is, fusion displaying for similar scenes, gradient displaying for scenes with great differences, and highlighting for stressing a target scene, to resolve visual mutation.

B) Fusion of visual and auditory channels: Complete spatial rendering of the visual and audible content in the virtual learning environment by using the workflow shown in FIG. 5; and, with reference to a head tracking technology, implement synchronous updating of the sound and picture while the student's head moves.
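The per-frame synchronization workflow of FIG. 5 might look like the following loop. This is a sketch under assumed interfaces: the tracker, video renderer, and audio engine objects are hypothetical, not the patent's prescribed implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    """Tracked head pose: position in the scene plus yaw (illustrative)."""
    x: float
    y: float
    yaw: float

def av_sync_step(tracker, video, audio, sources) -> None:
    """One refresh cycle of the FIG. 5 workflow: re-render the picture and
    re-spatialize every sound source from the newly tracked head pose."""
    pose: Pose = tracker.read_pose()       # assumed head-tracker interface
    video.render(pose)                     # update the panoramic picture
    for src in sources:
        # Redetermine the distance and bearing of each source relative to the
        # head, then refresh the left/right gains at the picture refresh rate.
        dist = math.hypot(src.x - pose.x, src.y - pose.y)
        bearing = math.atan2(src.y - pose.y, src.x - pose.x) - pose.yaw
        audio.set_spatial_gains(src, distance=dist, bearing=bearing)
```

Calling this function once per VR picture refresh keeps the heard sound field consistent with the observed picture, which is the head-and-ear synchronization elaborated in step B-b-i below.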
B-a) Spatial rendering of an audiovisual combination: Represent the attenuation of a learning object and a background sound source by using the linear volume-distance attenuation method in combination with a binaural positioning audio technology based on a Doppler effect model, and implement a spatial rendering mode applicable to the sounds of different objects and different background sound effects in a VR scene.

B-a-i) Simulation of multiple sound sources: Simulate static and dynamic point sound sources for corresponding objects in the virtual learning environment according to dynamically changing parameters such as position, direction, attenuation, and Doppler effect, and a background sound effect without parameters such as position and speed.

B-a-ii) Mixing of the multiple sound sources: To simulate the sound generation of objects (such as animals or plants) in a real field environment, obtain sound data of the objects by using the stereo audio processing mode shown in FIG. 6, either by actually acquiring sample sounds or by downloading sounds from an existing audio library; then generate a standard audio file by using audio editing software, mutually fuse the spectrums of the sounds of different objects, and generate the required multi-track VR audio.

B-a-iii) Sound attenuation effect representation: First, considering the impact of distance on attenuation in the virtual environment, denote the distance between the center of the student's head and a sound source as R, the maximum audible distance as Rmax, the maximum volume of the sound source as Vmax, and the attenuated volume as V. The attenuation formula is formula 1:

V = Vmax × (1 − R/Rmax), for R ≤ Rmax
V = 0, for R > Rmax        (formula 1)

Second, to compensate for the attenuation differences of different sound sources in the virtual learning environment, set minimum and maximum attenuation distances for the sound sources. (a) The minimum attenuation distance corresponds to the maximum volume: if the distance between the sound source and the student is shorter than the minimum attenuation distance, the volume no longer changes. (b) The maximum attenuation distance corresponds to the minimum volume: beyond this distance, a sound generated by the sound source cannot be heard. With reference to formula 1 and the volume-distance attenuation mode of the sound source (FIG. 7 presents a schematic diagram of the attenuation mode), the sound field reverberation effect of a point sound source is divided into the different reverberation areas shown in FIG. 8. In an actual application, according to the attenuation mode of a sound source, for example, a logarithmic attenuation mode for a directional point sound source and a linear attenuation mode for a background sound source, the received reverberation effect is stronger the closer the student is to the sound source in the scene.
B-a-iiii) Binaural positioning: As shown in FIG. 9, based on parameters of the sound source such as frequency, phase, and amplitude, determine the horizontal, front-rear, and vertical directions of a sound in the VR environment, and complete direction positioning of the sound source. Then determine distance attributes such as motion parallax, loudness, initial time delay, Doppler effect, and reverberation according to attributes such as distance, position, and terrain environment that affect sound propagation; calculate parameters such as the distance, speed, and direction of the sound source relative to the student in the virtual learning environment in real time by using a head-related transfer function according to the direction and distance of the sound source; process the signal of the sound source with a convolution operation; and generate a stereo sound of the sound source.

B-a-iiiii) Spatial rendering: According to the initial position, direction, and motion speed of the student, considering the Doppler effect, and referring to the position, direction, and motion changes of the sound source, obtain the motion track of the sound source in the virtual learning environment; and, according to the change of the student relative to the sound source, render the left and right sound channels with different volume strengths. For example, if the motion track of the sound source runs from right to left while its distance increases, then during the motion the strength of the right sound channel gradually attenuates and the strength of the left sound channel is then gradually reduced as well, until the sound dies away.

B-b) Synchronous audio and video updating: With reference to the head tracking technology, support synchronous updating of the video picture and sound while the student's head moves in the virtual learning environment, and implement fusion and presentation of the visual and auditory channels.

B-b-i) Head and ear synchronization: Track the position and posture of the student's head in the virtual learning environment in real time according to the refresh frequency of the VR picture, as shown in FIG. 10; redetermine the distance and direction of the sound source relative to the student; and implement synchronous rendering of the picture observed by the student and the sound heard by the student.

B-b-ii) Audiovisual fusion: Lay out the to-be-presented content in the virtual learning environment according to the teaching requirements (FIG. 11 is a content layout diagram of a scene), position the angle of view on the corresponding content through the student's head turning, and render different volumes in the left and right ears according to the distance and directions between the student and the sound source of the content.

B-b-iii) Interference cancellation among the multiple sound sources: For the multiple sound sources in the virtual learning environment, use the sound source attenuation function and the sound reverberation range model established in step B-a-iii), thereby reducing the interference factors among the multiple sound sources.

C) Multi-channel interaction design: To meet the requirement of multi-sensory cooperative interaction of the student in the virtual learning environment, screen, determine, decide, and fuse multi-sensory interactive behavior according to the corresponding parameters of interactive objects, in the order interactive task → interactive behavior → interactive experience. FIG. 12 presents a diagram of task state transition in the virtual learning environment.
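FIG. 12 is not reproduced here, but a task state transition of the kind it depicts can be sketched as a small state machine. The states and triggering events below are assumptions for illustration, since the patent describes the diagram only at this level.

```python
from enum import Enum, auto

class TaskState(Enum):
    IDLE = auto()       # task not yet triggered
    ACTIVE = auto()     # interactive behavior in progress
    FEEDBACK = auto()   # multi-channel feedback being presented
    DONE = auto()       # interactive experience completed

# Allowed transitions (assumed): trigger -> interact -> feed back -> finish.
TRANSITIONS = {
    (TaskState.IDLE, "trigger"): TaskState.ACTIVE,
    (TaskState.ACTIVE, "input_complete"): TaskState.FEEDBACK,
    (TaskState.FEEDBACK, "feedback_done"): TaskState.DONE,
    (TaskState.ACTIVE, "abort"): TaskState.IDLE,
}

def step(state: TaskState, event: str) -> TaskState:
    """Advance the task state machine; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

s = TaskState.IDLE
for e in ("trigger", "input_complete", "feedback_done"):
    s = step(s, e)
assert s is TaskState.DONE
```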
C-a) Interactive task design: Design an interactive task using the life history of plant growth as an example. The student participates in behavioral interaction in a process of sprouting, blooming, shape changing, and defoliating in sequence, and forms a good interactive experience. In this way, a good mechanism is provided for visual, auditory, and tactile channel interaction.

C-a-i) Interactive task decomposition: During task design, design the number, attribute, objective, input action, and result of a task according to its temporal and spatial attributes, and determine the function and specific process of the task. For example, in a bee pollination process, define number: 01; task attribute: spatial task; task objective: to complete pollination; task action (input): a bee searches for pollen; task result (output): contact with pollen; and, for the temporal task associated with the task of this number, the result output after the same task action is input: a buzz.

C-a-ii) Spatial task design: In the process of designing a spatial interactive task, the coherence of visual feedback should always be ensured, and the spatial task should be executed preferentially during execution. In the task of searching for pollen, the flying actions of the bee in the bee model should be coherent and natural, without jumps; the flying actions of the bee are also presented first in the execution process, and the sound effect of the temporal task is played afterwards.

C-a-iii) Temporal task design: Because this type of task has a long time unit, focus on the design of auditory channel information, the content of which includes background music, feedback sound effects, and the like, and mainly consider the sound information content and its accuracy in the output step. The buzz of the bee should be real and accurate when it is output.

C-b) Task decision: After the multi-channel information is input, first determine the cooperative relationships therein and complete fusion of the input multi-channel information; then determine the weight and reliability of each piece of output information, accurately convey the feedback information to the student's visual, auditory, and tactile sensory organs, and complete multi-channel fusion.

C-b-i) Input information synthesis: According to the input information of channels such as the visual, auditory, and tactile channels, determine the cooperative relationships between interactive actions during task execution, and complete the synthesis (concurrent or sequential execution) of the input information of each channel.

C-b-ii) Multi-channel integration: Decide the weight of the input information of each channel (such as gaze interaction, gesture input, and speech recognition) to ensure that the output information is accurately conveyed to the student in the virtual learning environment, which forms a condition for multi-channel integration.

C-b-iii) Multi-channel fusion: By properly allocating the output information of each channel, complete the cooperative feedback of each channel in time. Because a motion offset of visual imaging occurs while a sound is being heard, increase the weight of the visual information of the bee according to multi-channel fusion theory, and let the student weaken the auditory feeling by highlighting visual dominance; in the sound feedback, design the buzz of the bee to go from weak to strong, so that the sound feedback is reliable in time. Multi-channel fusion that comprehensively considers task time and space enables the student to obtain a good interactive experience.
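The weighting and fusion of step C-b could be realized along the following lines. The channel names and weight values are illustrative, not values given by the patent; the raised visual weight merely echoes the visual dominance of the bee example above.

```python
def fuse_channels(inputs: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted fusion of normalized per-channel confidences (illustrative).

    inputs  -- confidence in [0, 1] per channel, e.g. gaze, gesture, speech
    weights -- relative channel weights, as decided in step C-b-ii
    """
    total = sum(weights.get(ch, 0.0) for ch in inputs)
    if total == 0.0:
        return 0.0
    return sum(inputs[ch] * weights.get(ch, 0.0) for ch in inputs) / total

# Visual dominance: the visual channel carries the most weight (cf. the bee task).
weights = {"visual": 0.5, "auditory": 0.3, "tactile": 0.2}
inputs = {"visual": 0.9, "auditory": 0.6, "tactile": 0.4}
decision = fuse_channels(inputs, weights)   # 0.71: confident enough to give feedback
```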
What is not described in detail in this specification pertains to the prior art well known to a person skilled in the art. The foregoing descriptions are merely exemplary embodiments of the present invention, but are not intended to limit the present invention. Any modification, equivalent replacement, and improvement made without departing from the spirit and principle of the present invention shall fall within the protection scope of the present invention.
Claims:
Claims (4)

[1] A method for multi-channel fusion and presentation of a virtual learning environment, which learning environment is oriented to field practice teaching, the method comprising the steps of:

A) content generation: completing VR panoramic content acquisition in a practice area by using aerial photographing and terrestrial acquisition, setting up an organization mode for knowledge elements in different layers and areas of a virtual learning environment, and completing the optimization of a scene-to-scene jumping effect;

B) fusing visual and auditory channels: representing attenuation of a learning object and a background sound source in the virtual learning environment by using a linear volume-distance attenuation method, and implementing a spatial rendering mode for sounds from different objects in a VR scene; and, with reference to a head tracking technology, implementing synchronous updating of a panoramic video and sound while a student's head moves; and

C) multi-channel interaction design: regarding a requirement of multi-sensory cooperative interaction of the student in the virtual learning environment, screening, determining, deciding, and fusing multi-sensory interactive behavior according to corresponding parameters of interactive objects in the order interactive task → interactive behavior → interactive experience.

[2] The method for multi-channel fusion and presentation of a virtual learning environment, which learning environment is oriented to field practice teaching, according to claim 1, wherein the content generation in step A) specifically comprises the steps of:

A-a) acquiring data: for realistically reproducing field practice teaching, acquiring teaching information in a field practice area from two layers, terrestrial observation points and aerial photographing areas, and completing the digitization in a panoramic VR video mode;

A-a-i) acquiring terrestrial observation point information: for terrestrial observation practice content, using a high-definition motion camera group to capture dynamic images from all angles, implementing high-density multi-angle acquisition of real field information, and obtaining full material information of a field practice scene;

A-a-ii) acquiring aerial information by using an unmanned aerial vehicle: for observing an aerial view and the vertical distribution of biotopes in a macro-scale practice area, taking pictures of biotopes of aerial photographing areas in different ecotopes by using the unmanned aerial vehicle, to obtain material information covering a full field of view;

A-a-iii) mapping therebetween, where an aerial photographing acquisition point of the unmanned aerial vehicle must correspond to the content of a terrestrial observation point, that is, when panoramic aerial content is acquired in one area, information data of a plurality of terrestrial observation points must be acquired accordingly;

A-b) data organization: establishing an aggregation mode between knowledge elements in different layers and different areas according to a progressive relationship and an association between teaching content, and fusing subject knowledge and a practice route according to a field practice routine;

A-b-i) acquisition point annotation: using an electronic map as a basic geographic data platform, using different symbols to display VR panoramic acquisition points of terrestrial observation and of aerial photographing by the unmanned aerial vehicle, and annotating the VR panoramic acquisition points on the electronic map according to their spatial positions;
A-b-ii) vertical association: establishing a relationship between an aerial photographing scene and a terrestrial acquisition point in the virtual learning environment by using a pyramid hierarchical structure model, and implementing fast switching from a macro scene to a micro object;

A-b-iii) horizontal association: in a sandbox model of a terrain and landform of a practice area, combining ecotope aerial photographing points, terrestrial observation points, and subject knowledge points according to a moving route of field practice, to form different survey routes;

A-c) scene transition: for a mutual relationship between an internship site and content, designing a solution for optimization of the scene-to-scene jumping and switching effect;

A-c-i) guiding element design, where an interactive interface of the virtual learning environment changes from a two-dimensional plane to a three-dimensional sphere, and media navigation information such as a text, symbol, and voice is designed to direct the student to a wider field of view;

A-c-ii) scene switching: according to the geographically relative positions of two scenes, adding an indicative icon of a target switching point to a previous scene as an entry for jumping to the next scene; and

A-c-iii) transition optimization: with respect to a large difference in picture color, brightness, or content during scene switching, using similar fusion, gradient fusion, and highlighting modes to resolve a visual mutation phenomenon.

[3] The method for multi-channel fusion and presentation of a virtual learning environment, which learning environment is oriented to field practice teaching, according to claim 1, wherein the fusion of visual and auditory channels in step B) specifically comprises the steps of:

B-a) spatially rendering an audiovisual combination: representing an attenuation of an object and another background sound source in the virtual learning environment by using the linear volume-distance attenuation method in combination with a binaural positioning audio technology based on a Doppler effect model, and implementing a spatial rendering mode applicable to sounds of different objects and different background sound effects in the VR scene;

B-a-i) simulating multiple sound sources: simulating static and dynamic point sound sources of corresponding objects in the virtual learning environment according to dynamically changing parameters of position, direction, attenuation, and Doppler effect, and a background sound effect without position and velocity parameters;

B-a-ii) mixing the multiple sound sources: to simulate vocal scenes of objects in a real field environment, fusing spectrums of sounds from different objects together and generating a multi-track mix;

B-a-iii) representing sound attenuation effects: using a combination of a logarithmic attenuation mode and a linear attenuation mode to reproduce the impact of distance and direction factors in the real field environment on a sound attenuation effect, that is, using the logarithmic attenuation mode for a directional point sound source, and using the linear attenuation mode for the background sound source;

B-a-iiii) binaural positioning: based on the motion, direction, position, and structure attributes of the sound source reflected by the sound loudness and spectrum characteristics, determining a position of a sound source in the virtual learning environment relative to a position of the student according to a sound propagation principle;

B-a-iiiii) spatial rendering: taking into account a Doppler effect, rendering left and right sound channels with different strengths depending on the position of the student, and a direction, a distance, and a movement change of the sound source in the virtual learning environment;
B-b) synchronous updating of audio and video: with reference to the head tracking technology, supporting synchronous updating of a video picture and sound while the student's head moves in the virtual learning environment, and implementing fusion and presentation of the visual and auditory channels;

B-b-i) synchronizing the head and ears: tracking in real time a position and posture of the student's head in the virtual learning environment based on a refresh rate of a VR picture, redetermining the distance and direction of the sound source relative to the student, and implementing synchronous rendering of a picture observed by the student and a sound heard by the student;

B-b-ii) audiovisual fusion: presenting a content scene in the virtual learning environment according to a teaching requirement, positioning a viewing angle on corresponding content by turning of the student's head, and rendering the volume of different sound sources based on a distance between the student and the sound source of the content; and

B-b-iii) suppressing interference from the multiple sound sources: for the multiple sound sources in the virtual learning environment, using a sound source attenuation function and simulating a sound reverberation range, thereby reducing the interference factors of the multiple sound sources.

[4] The method for multi-channel fusion and presentation of a virtual learning environment, which learning environment is oriented to field practice teaching, according to claim 1, wherein the multi-channel interaction design in step C) specifically comprises:

C-a) interactive task design: achieving orderly participation of interactive behavior and forming a good interactive experience, thereby providing a good mechanism for multi-channel interaction;

C-a-i) interactive task decomposition, in which, during task design, a task is split into a temporal task and a spatial task according to the temporal and spatial attributes of the task, and an interactive mode, an objective, an action, a function, and a specific process of the task are designed according to the attributes of the task;

C-a-ii) spatial task design: when designing a spatial interactive task, ensuring the coherence of visual feedback, and executing the spatial task preferentially during execution;

C-a-iii) temporal task design: focusing on the design of auditory channel information, the content of which includes background music and a feedback sound effect, and mainly considering the sound information content and accuracy in an output step;

C-b) task decision: after multi-channel information is input, first determining a cooperative relationship therebetween and completing the fusion of the input multi-channel information, and then determining a weight and reliability of each piece of output information, accurately conveying feedback information to sensory organs of the student, and completing multi-channel fusion;

C-b-i) input information synthesis: according to the input information from visual, auditory, and tactile channels, determining a cooperative relationship between interactive actions during task execution, and completing synthesis of the input information from each channel;

C-b-ii) multi-channel integration: determining a weight of the input information of each channel to ensure that the output information is accurately conveyed to the student in the virtual learning environment, which is a prerequisite for multi-channel integration; and
C-b-iii) multi-channel fusion: by correctly allocating the output information of each channel, accurately conveying the feedback information to the student's sensory organs, and completing the multi-channel fusion, so that the student obtains a good interactive experience.
Patent family: CN111009158A, published 2020-04-14; CN111009158B, published 2020-09-15.
Priority: CN201911312490.8A, filed 2019-12-18 (Virtual learning environment multi-channel fusion display method for field practice teaching).